3-Computer Science-System Analysis

systems analysis

Systems analysis {systems analysis} {system analysis} {system design} tracks inputs, processing, and outputs.

theories

Artificial intelligence, artificial life, catastrophe theory, chaos theory, complexity theory, computational complexity theory, cybernetics, dynamical systems theory, evolution theories, experimental mathematics, fractal geometry, general systems theory, nanotechnology, scientific computing, self-organization, and statistical mechanics are theories about systems and can be analysis tools.

changing

To change system, change weakest part first. Change only one step or thing, then test system behavior [Kampis, 1991].

matrix

Matrices indicate part-pair relation and interaction strength. Hypermatrices can show secondary interactions and synergisms.

iterations

Systems can repeat processes to converge on solutions. Fast systems, algorithms, and processes are unlikely to use loops and iterations.

goal-driven system

Goals are ideal states to approach or bad states to avoid. Programs measure progress toward goal after each step. Systems {goal-driven system} can try to minimize difference between current situation {origin, state} and desired situation {destination, state}. Systems can try to maximize difference between current situation and undesired situation. Goal-driven systems typically have subgoals and subsystems that lead to main goal.
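
A minimal Python sketch of such a goal-driven loop, assuming a summed-absolute-difference distance measure (function and variable names are hypothetical):

# Goal-driven system sketch: repeatedly apply whichever step most
# reduces the difference between current state and goal state.
def distance(state, goal):
    return sum(abs(s - g) for s, g in zip(state, goal))

def seek_goal(state, goal, steps, max_iterations=1000):
    for _ in range(max_iterations):
        if distance(state, goal) == 0:
            return state                      # goal reached
        # Measure progress toward goal after each candidate step.
        best = min((step(state) for step in steps),
                   key=lambda s: distance(s, goal))
        if distance(best, goal) >= distance(state, goal):
            return state                      # no step makes progress
        state = best
    return state

# Usage: two steps that move the first coordinate up or down.
steps = [lambda s: (s[0] + 1, s[1]), lambda s: (s[0] - 1, s[1])]
print(seek_goal((0, 0), (3, 0), steps))       # (3, 0)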

To reach goal, systems usually use association and recall, rather than logic or statistics.

interactive system

Systems {interactive system} can be complex, chaotic, and unpredictable. They can self-organize, because interactions assume stable patterns. Other parts can assume damaged-part or overloaded-part functions.

operations research

Data flows from input, to data capture, to data processing, to output, to output distribution {operations research}|. At each step, there must be error prevention, error detection, error correction, security, backup, and controlled access.

categoricity

Theories can describe pure structure categories {categoricity}, with no reference to physical objects or events. Systems with that structure category are isomorphic to all other systems with that structure category.

3-Computer Science-System Analysis-Definition

system by definition

Independent object and relation sets {system, definition} can have limited and special interactions with outside.

outside

Systems {open system} can interact with outside world and take in or give out negentropy. Open systems can reach same state in different ways. Systems {closed system, analysis} can be separate from outside world.

objects and properties

System objects and events have properties, which have values. Systems, codes, or machines have one or more independent elements and element types. For example, elements can be numbers, letters, words, picture elements, sounds, tastes, smells, pressures, emotions, and signals. Elements can have sequences, patterns, or structures. Element properties can have value ranges.

relations

Relations determine how property values change. Real systems have forces and transfer energy. Interactions among system parts define system functions, goals, or dynamics. Parts can exchange data to provide input to one part from other-part output {communication, system}. Parts can combine to perform subfunctions {cooperation, system}. Parts can increase other-part activities or amounts by positive feedback. Parts can decrease other-part activities or amounts by negative feedback. Series can use output as input in chain reactions.

states

At all times, system objects have property values.

purpose

Systems typically describe action or how to perform action. Systems describe how to manage data. Goals are ideal states, in which variables have optimum values. Systems can try to maximize, minimize, or optimize variable values.

process

Systems select input data, represent data, process data, and output data. Processing stores, recalls, connects, and compares data. Given inputs, outputs have probability. Systems approach, avoid, use, control, or act on subsystems or environment. Systems select from choices. Systems recognize objects or events. Systems respond to input and/or state. Systems have stored facts and actions to use as resources.

problems

Systems with too many interactions are fragile. Systems with too many similar interactions are too redundant to adapt to novel situations. Systems must have enough time to process information to avoid mistakes and act correctly [Lévi-Strauss, 1966] [Lévi-Strauss, 1969] [Lévi-Strauss, 1985].

formal system

Information processing produces output from input using functions and symbols {formal system, analysis}, without using intentions or intentionality.

metron

Elements can move or change independently. Independent elements have degrees of freedom {logon}. Elements have weights or probabilities, which form metrics {metron}.

operator in system

Systems have one or more operations or relations {operator, system} among elements. Operators can start with existing elements and make same element types or make new element types.

types

Operators can be negating {unary operator}; adding, multiplying, linking, associating, and surrounding {binary operator}; association, chord, and color making {ternary operator}; or object making and scene making {n-ary operator}.

relations

Familial relations can be parent-child, sister-sister, cousin-cousin, and so on. Object relations can be whole-part. Spatial relations can be above-below, left-right, front-back, inside-outside, and so on. Temporal relations can be before-after.

3-Computer Science-System Analysis-Outputs

equivalent output

Systems can have the same output {equivalent output} as other systems. First, identifier module samples system input and output to identify system parameters that model system. Second-system control module uses system parameters that model first system and feedback signals from second system to add control signal to second-system output.
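
A minimal Python sketch of the identifier module, assuming the first system fits a linear model y[t] = a*y[t-1] + b*u[t] (model form, names, and parameter values are assumptions for illustration):

import numpy as np

# Identify parameters a, b of an assumed linear model
# y[t] = a*y[t-1] + b*u[t] from sampled input u and output y,
# by least squares. The identified parameters can then model the
# first system inside the second system's control module.
def identify(u, y):
    X = np.column_stack([y[:-1], u[1:]])   # regressors y[t-1], u[t]
    params, *_ = np.linalg.lstsq(X, y[1:], rcond=None)
    return params                          # estimated (a, b)

# Usage: sample a simulated system with a = 0.9, b = 0.5.
rng = np.random.default_rng(0)
u = rng.normal(size=200)
y = np.zeros(200)
for t in range(1, 200):
    y[t] = 0.9 * y[t - 1] + 0.5 * u[t]
print(identify(u, y))                      # approximately [0.9, 0.5]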

identification problem

Two different systems can have same outputs for same inputs, so people cannot know machine nature from inputs and outputs alone {identification problem}.

3-Computer Science-System Analysis-Model

model

Ordered sets {model, system} {software model}, of various elements, can represent object or process properties. Models simulate positions, motions, times, momenta, energies, and interactions. Models can be physical or mathematical but have actual physical actions. Model elements have or represent points, lines, angles, surfaces, and/or solids, so models have geometric relations. Typically, model shapes are similar to physical shapes. Model and physical metrics are proportional. Model motions represent physical motions. Typically, model motions are similar to physical motions. Function models have moving parts that change positions and motions to represent events and transfers among states.

purposes

Constructing models can find what is important and not important, sharpen definitions and categories, identify procedure steps, identify structure parts, test interactions, sharpen boundaries, make categories, and uncover new relations, meanings, and properties. Constructing models adds information that causes understanding.

Models can represent system views. Models try to include constancies in real system and relations among elements. Models can simulate one description level or one input/output signal class. Models can measure information flows and constraints. Models can test knowledge or predictions and can reveal new object or process knowledge.

symbol grounding

Models can have parts and functions that relate to physical parts and functions {symbol grounding, model}.

black box

Systems {black box}| can have only inputs and outputs, with no process explanation.

computation theory

Models {computation theory} must specify what to compute.

formal model

Models {formal model} can put relations and components into symbols. Solutions come from mathematical analysis.

scale model

Models {scale model}| {replica} can copy larger-object shapes.

simulation on computer

Models {simulation, computer} can put relations and components into symbols. Computers find iterated or statistical solutions [Pellionisz and Llinas, 1982].

3-Computer Science-System Analysis-Learning

learning in systems

Systems can learn {learning, machine}, if parts or part relations can alter. Learning allows new states and/or trajectories. Learning requires mechanisms that can change system relations or rules. Learning requires input information. Separate evaluation function indicates success or failure in performing system function.

competitive learning

Networks {competitive learning} can use units that inhibit nearby units, creating competition among units.

constraint satisfaction

Networks can adjust connection strengths among nodes {constraint satisfaction}|, using feedforward and feedback, to find complete and consistent input-property interpretations.

conspiracy effect

If new pattern is similar to training patterns, new pattern enhances all network nodes {conspiracy effect}. New pattern unrelated to training patterns degrades training.

3-Computer Science-System Analysis-Network

association graph

Networks can link nodes by associations {association graph}, with or without scaling.

connectedness in networks

Networks can have only one or two inputs to each node {sparsely connected}. Networks can have four or more inputs to each node {densely connected} [Kanerva, 1988] {connectedness}.

feedforward network

Networks {feedforward network} can have interconnected units that adjust connection strengths to store processes or representations. Input-layer nodes receive intensities. Middle-layer nodes connect to all input nodes and to all output nodes. Nodes at same level do not interact. Output layer indicates answers by output pattern at nodes. Output from hidden units logarithmically relates to input.

McCulloch-Pitts neuron

Neuron models {McCulloch-Pitts neuron} can use linear threshold logic units. The learning rule strengthens a synapse if input fails to fire neuron when expected and weakens a synapse if input fires neuron when not expected. Networks can use McCulloch-Pitts neurons to store representations, match input to representations, and send output.

pandemonium network

Networks {pandemonium network} {contention scheduling network} {winner-take-all network} depend on competition among processes {demon}, until only one process is still active. Representations try to inhibit all other representations. The strongest inhibits all others, and program selects it.

Perceptron

Network input-output devices {Perceptron} can alter connections or connection strengths by adjusting weights based on input, using feedback that output was correct or incorrect compared to ideal output {Perceptron learning rule}. Ideal output is one pattern or linearly separable patterns. In initial learning period, Perceptrons adjust weights. In later test period, Perceptrons send more or less correct output for inputs.
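
A minimal Python sketch of the Perceptron learning rule (learning rate, bias unit, and epoch count are standard assumptions, not specified above):

import numpy as np

# Perceptron learning rule: adjust weights by the difference between
# ideal output and actual thresholded output, times the input.
def train_perceptron(X, d, epochs=20, rate=0.1):
    X = np.column_stack([X, np.ones(len(X))])   # append bias input
    w = np.zeros(X.shape[1])
    for _ in range(epochs):                     # initial learning period
        for x, ideal in zip(X, d):
            y = 1 if w @ x > 0 else 0           # threshold unit output
            w += rate * (ideal - y) * x         # feedback adjusts weights
    return w

# Test period: AND is linearly separable, so outputs are correct.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
d = np.array([0, 0, 0, 1])
w = train_perceptron(X, d)
print([1 if w @ np.append(x, 1) > 0 else 0 for x in X])  # [0, 0, 0, 1]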

random graph

Systems {random graph} can have nodes and edges with no organization and no order.

clustering

If edge number is smaller than half node number, so edge-to-node ratio is less than 0.5, few nodes cluster, largest cluster is small, and most nodes do not connect to other nodes. Connection growth rate is greatest as edge-to-node ratio increases from 0.5 to 0.6. If edge-to-node ratio equals 0.6, system has phase transition, most nodes cluster, largest cluster is big, and most nodes connect to other nodes. After that, growth slows, because most nodes have connections already.
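
A Python sketch of this phase transition, using union-find to track clusters as random edges accumulate (node count and seed are arbitrary illustration choices):

import random

# Largest-cluster fraction in a random graph at a given edge-to-node ratio.
def largest_cluster_fraction(n_nodes, n_edges, seed=0):
    rng = random.Random(seed)
    parent = list(range(n_nodes))
    def find(i):                        # union-find with path compression
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for _ in range(n_edges):            # add random edges
        a, b = rng.randrange(n_nodes), rng.randrange(n_nodes)
        parent[find(a)] = find(b)       # merge the two clusters
    sizes = {}
    for i in range(n_nodes):
        root = find(i)
        sizes[root] = sizes.get(root, 0) + 1
    return max(sizes.values()) / n_nodes

# Usage: the largest cluster stays small below ratio 0.5, then grows fast.
for ratio in (0.3, 0.5, 0.6, 0.8, 1.0):
    print(ratio, largest_cluster_fraction(10000, int(ratio * 10000)))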

node number

If node number is small, phase transition has wider edge-to-node ratio. If node number is large, phase transition has smaller ratio.

reaction graph

Graphs {reaction graph} can have nodes that are compounds, such as polymers, and connectors that are reactions.

nodes

Polymers have different lengths. Polymers are products of reactant polymers. Compounds can supply energy to make polymers. Required compounds are polymer units and do not have input reaction paths. Some compounds are never reactants and have no output reaction paths.

connectors

Connectors lead from reactant nodes to product nodes {reaction path}. Reactions can lengthen or shorten polymers. Lengthening polymers requires energy. Shortening polymers releases energy.

process

If existing polymers can make more-complex polymers, number of compound nodes increases and number of reactions increases more. If many different reactions make and break polymers, reaction-to-compound ratio increases exponentially.

catalyst

Reactions can have input catalysts, which increase reaction rate but which the reaction does not consume. Reaction-graph subsets {catalyzed reaction subgraph} can have only catalyzed reactions.

autocatalytic

Systems {autocatalytic system, reaction graph} can have reactions whose products are reactants and increase rates of reactions that depend on concentration. Autocatalytic systems consume original reactants quickly and end quickly. Catalyzed reaction subgraphs {autocatalyzed subset} are self-catalyzing if all compounds are either food or are catalysis products.

If existing polymers can make more-complex polymers, a small percentage of new polymers can be catalysts. When catalyzed-reaction number becomes more than polymer number, phase transition goes to autocatalytic system.

If autocatalytic systems have food and energy compounds, number of different polymers can increase. If system can double, probably requiring specialized catalysts and molecules, it has reproduced itself {self-reproducing}.

critical

Reaction graphs can have chain reactions {criticality, reaction}. Chain reactions can make same molecules {subcriticality}, so number increases exponentially. Chain reactions can make new molecule types {supracriticality}, so chain reactions make exponentially more types. If different-object-type number increases, supracritical behavior increases. If reaction-catalysis probability increases, supracritical behavior increases.

transition network

Networks {transition network} can represent objects as nodes and conjunctions between objects as arcs.

purposes

Transition networks can model binary object conjunctions by AND operations. Transition networks can represent processes. Transition networks do not model object quantities. Transition networks do not model disjunctions, such as inclusive OR.

transition

System states have specific nodes and arcs. Transition from one state to another state has probability. Transitions have directions. There can be final state. Transitions can depend on time, properties, and previous transitions.

comparison

Transition networks relate to algorithms and grammars.

3-Computer Science-System Analysis-Network-Boolean

Boolean network

Networks {Boolean network} can have nodes that are either on (1) or off (0). Inputs from other nodes determine node state. Boolean rules can make values 0 or 1 equally likely, make value 0 certain, make value 1 certain, or make any probability. If average Boolean rule makes value 0 or 1 almost certain, system is stable. If average Boolean rule makes 0 and 1 equally likely, system is unstable. At one probability, system switches rapidly from order to chaos.
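
A Python sketch of a random Boolean network with two inputs per node (network size, wiring, and rules are randomly chosen for illustration):

import random

# Random Boolean network: each node has k input nodes and a random
# Boolean rule (a lookup table over the 2^k input combinations).
def make_network(n, k=2, seed=0):
    rng = random.Random(seed)
    wiring = [[rng.randrange(n) for _ in range(k)] for _ in range(n)]
    rules = [[rng.randrange(2) for _ in range(2 ** k)] for _ in range(n)]
    return wiring, rules

def step(state, wiring, rules):
    new_state = []
    for node in range(len(state)):
        index = 0                          # encode inputs as table index
        for source in wiring[node]:
            index = (index << 1) | state[source]
        new_state.append(rules[node][index])
    return new_state

# Usage: deterministic updates must eventually revisit a state,
# so the trajectory enters a state cycle.
wiring, rules = make_network(8)
state = [0, 1, 0, 1, 1, 0, 0, 1]
for _ in range(6):
    state = step(state, wiring, rules)
    print(state)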

canalyzing

Boolean networks {canalyzing Boolean function} can have input that determines node output. For example, OR has two inputs and is canalyzing, because if either input is 1, output is 1. EXCLUSIVE OR is not canalyzing, because inputs depend on each other. If input number is two, 14 of 16 possible Boolean functions are canalyzing. EXCLUSIVE OR and IF AND ONLY IF are non-canalyzing. For Boolean functions with more than two inputs, few are canalyzing. Canalyzing functions have fewer interactions and so are simpler.

3-Computer Science-System Analysis-Network-Neural Net

neural network

Interconnected units {neural network}| can adjust connection strengths to model processes or representations.

Each input-layer unit sends its signal intensity to all middle-layer units, which weight each input.

Each middle-layer unit sends its signal intensity to all output-layer units, which weight each input.

The system can use feedback [Hinton, 1992], feed-forward, and/or human intervention to adjust weights (connection strengths).

To calculate adjusted weight W' using feedback, subtract constant C times the partial derivative D(e) of error function e with respect to the weight from original weight W: W' = W - C * D(e). The program or programmer can calculate constant C and error function e.

Alternatively, to calculate adjusted weight W' using feedback, add original weight W and constant C times the difference of the current amount c and estimated true amount t: W' = W + C * (c - t). The program or programmer can calculate constant C and estimated t.

Widrow-Hoff procedure uses f(s) = s: W' = W + c * (d - f) * X, where d is the desired output value and X is the input.

Generalized delta procedure uses f(s) = 1 / (1 + e^-s): W' = W + c * (d - f) * f * (1 - f) * X.
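
Both update rules in a minimal Python sketch (W and X as vectors; d and c as above):

import numpy as np

# Widrow-Hoff update: f(s) = s, with s = W . X.
def widrow_hoff(W, X, d, c=0.1):
    f = W @ X
    return W + c * (d - f) * X

# Generalized delta update: f(s) = 1 / (1 + e^-s).
def generalized_delta(W, X, d, c=0.1):
    f = 1.0 / (1.0 + np.exp(-(W @ X)))
    return W + c * (d - f) * f * (1 - f) * X

# Usage: one update moves the output toward the desired value d.
W = np.zeros(3)
X = np.array([1.0, 0.5, -0.5])
print(widrow_hoff(W, X, d=1.0))   # [0.1, 0.05, -0.05]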

Note: Units within a layer typically have no connections [Arbib, 2003].

Output units can represent one of the possible input patterns. For example, if the system has 26 output units to detect the 26 alphabet letters, for input pattern A, its output unit is on, and the other 25 output units are off.

Output unit values can represent one of the possible input patterns. For example, if the system has 1 output unit to detect the 26 alphabet letters, for input pattern A, output unit value is 1, and for input pattern Z, output unit value is 26.

The output pattern of the output layer can represent one of the possible input patterns. For example, if the system has 5 output units to detect the 26 alphabet letters, for input pattern A, the output pattern is binary number 00001 = decimal number 1, where 0 is off, 1 is on, and the code for A is 1, code for B is 2, and so on. For input pattern Z, the output pattern is binary number 11010 = decimal number 26.

Output-pattern values can represent one of the possible input patterns. For example, if the system has 2 output units to detect the 26 alphabet letters, for input pattern A, output-pattern value is 01, and for input pattern Z, output-pattern value is 26.

For an analog system, the output pattern of the output layer can resemble an input pattern. For example, to detect the 26 alphabet letters, the system can use 25 input units and 25 output units. For input pattern A, the output pattern resembles A. Units can have continuous values for different intensities.
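
The digital coding schemes above, as a short Python sketch (letter index i runs from 1 for A to 26 for Z):

# One unit per letter: unit i on, all other units off.
def one_per_letter(i):
    return [1 if j == i else 0 for j in range(1, 27)]

# One unit whose value is the letter index.
def value_code(i):
    return [i]

# Five units as a binary number: A = 00001, Z = 11010.
def binary_code(i):
    return [int(bit) for bit in format(i, '05b')]

# Two units holding the decimal digits of the letter index: A = 01, Z = 26.
def digit_code(i):
    return [i // 10, i % 10]

print(binary_code(1), binary_code(26))  # [0, 0, 0, 0, 1] [1, 1, 0, 1, 0]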

uses

Neural networks can model processes or representations.

Large neural networks can recognize more than one pattern and distinguish between them.

They can detect pattern unions and intersections. For example, they can recognize words.

Neural networks can recognize patterns similar to the target pattern, so neural networks can generalize to a category. For example, neural networks can recognize the letter T in various fonts.

Because neural networks have many units, if some units fail, pattern recognition can still work.

Neural networks can use many different functions, so neural networks can model most processes and representations. For example, Gabor functions can represent different neuron types, so neural networks can model brain processes and representations.

Neural networks can use two middle layers, in which recurrent pathways between first and second middle layer further refine processing.

vectors

Input patterns and output patterns are vectors (a, b, c, ...), so neural networks transform vectors and so are like tensors.

feedforward

Neural networks use feed-forward parallel processing.

types: non-adaptive

Hopfield nets do not learn and are non-adaptive neural nets, which cannot model statistics.

types: adaptive

Adaptive neural nets can learn and can model statistical inference and data analysis. Hebbian learning can model principal-component analysis. Probabilistic neural nets can model kernel-discriminant analysis. Hamming net uses minimum distance.

types: adaptive with unsupervised learning

Unsupervised learning uses only internal learning, with no corrections from human modelers. Adaptive Resonance Theory requires no noise to learn and cannot model statistics. Linear-function models, such as Learning Matrix, Sparse Distributed Associative Memory, Fuzzy Associative Memory, and Counterpropagation, are feedforward nets with no hidden layer. Bidirectional Associative Memory uses feedback. Kohonen self-organizing maps and reinforcement learning can model Markov decision processes.

types: adaptive with supervised learning

Supervised learning uses internal learning and corrections from human modelers. Adaline, Madaline, Artmap, Backpropagation, Backpropagation through time, Boltzmann Machine, Brain-State-in-a-Box, Fuzzy Cognitive Map, General Regression Neural Network, Learning Vector Quantization, and Probabilistic Neural Network use feedforward. Perceptrons require no noise to learn and cannot model statistics. Kohonen nets for adaptive vector quantization can model K-means cluster analysis.

brains compared to neural networks

Brains and neural networks use parallel processing, can use recurrent processing, have many units (and so still work if units fail), have input and output vectors, use tensor processing, can generalize, can distinguish, and can use set union and intersection.

Brains use many same-layer neuron cross-connections, but neural networks do not need them because they add no processing power.

The neural-network input layer consists of cortical neuron-array registers that receive from retina and thalamus. Weighting of inputs to the middle layer depends on visual-system knowledge of information about the reference beam. The middle layer is neuron-array registers that store perceptual patterns and make coherent waves. The output layer is perceptions in mental space.

Neurons are not the input-layer, middle-layer, or output-layer units. Units are abstract registers that combine and integrate neurons to represent (complex) numbers. Input layer, middle layer, and output layer are not physical arrays but programmed arrays (in visual and association cortex).

Neural-network processing is not neural processing. Processing uses algorithms that calculate with the numbers in registers. Layers, units, and processing are abstract, not directly physical.

nerve net

Models {nerve net}| can simulate object-recognition neuron networks. Nerve nets assign weight to nodes. Node input intensity multiplies weight to give output. Vector sum of outputs over nodes is object representation.

Hopfield network

Networks {Hopfield network} can use content-addressable memory, with weighted features and feedback.

simple recurrent network

Unit sets {context layer, network} can receive a hidden-layer copy and then add back to hidden layer {simple recurrent network}.

tensor network theory

Geometrical neural nets {tensor network theory} can make space-time coordinate transformations [Pellionisz and Llinas, 1982].

3-Computer Science-System Analysis-Phase Space

phase space of system

Systems have things with features. System models {phase space, system}| can use abstract-space nodes to represent things and can use dimensions to represent features or factors. Nodes can be system states. Similar states are near each other. Low-dimension systems have less information about nodes, because nodes have fewer factors. High-dimension systems have more node information. With more dimensions, phase-space-model predictability declines, information flows increase, and mixing increases.

plotting in phase space

Phase space can represent variable values over time {plotting}|. First dimension is for value at time t, second dimension is for value at time t + 1, and so on. For simple processes, phase spaces can have characteristic shapes.
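
A Python sketch of such plotting by time delay (the logistic map is an arbitrary example process):

# Delay plotting: pair each value at time t with the value at time t + 1.
def delay_pairs(series):
    return list(zip(series[:-1], series[1:]))

# Example process: the logistic map traces a characteristic parabola
# in this phase space.
x, series = 0.3, []
for _ in range(100):
    series.append(x)
    x = 3.9 * x * (1 - x)
print(delay_pairs(series)[:3])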

projection in phase space

Phase-space points or trajectories can project onto two-dimensional surfaces or three-dimensional solids {projection, phase space}. New dimensions are phase-space-dimension composites.

return map

Phase spaces can have cross-sections {return map} {Poincaré map} {Poincaré section} of fewer dimensions. Phase-space points, nodes, and trajectories can project onto cross-section.

sparse population coding

Phase space can have only a few widely separated nodes {sparse population coding}. Nodes can represent complex states. States are not similar to other states. Sparse population coding can be for pattern or object recognition [Kanerva, 1988].

3-Computer Science-System Analysis-Components

separability of modules

Systems can have independent modules {separability} {component separability} {module separability}. Modules are element groups. Connected-node groups can have few connections to rest of system. Modules are information-processing functions, which transfer input to output. Module outputs are input to other modules and same module. Module-relation pattern determines how modules communicate and affect each other {correspondence rule, module}.

component hierarchy

Many systems have lower-level and higher-level components {component hierarchy} [Pattee, 1973] [Pattee, 1995].

conflict of subsystems

Hierarchies always have conflicts {conflict, hierarchy} between subsystems at same level. Higher agents settle conflicts using laws, policies, communication, power, compromise, and mediation.

3-Computer Science-System Analysis-Controls

system controls

Subsystems {control subsystem} {system controls} at same level inhibit each other. Higher subsystems compare lower-subsystem outputs over time and space, to send summaries to even higher subsystems.

control problem

Inputs can control system performance {control problem}. Control can keep output near reference value {regulator problem}. Control can follow trajectories {tracking problem}. Good control methods use independent positive and negative signals with wavelength and amplitude ranges. Such control signals have close control, smooth response, and good sensitivity.

control express line

Paths {express line} {control express line}, from low-level to high-level nodes, suggest, find, activate, prime, or inhibit hypotheses, models, or patterns. Paths, from high-level to low-level nodes, find, activate, prime, or inhibit lower-level nodes or indexes, for searches.

3-Computer Science-System Analysis-Controls-Feedback

feedback mechanism

Some output {feedback} can be input to regulate output [Wiener, 1947]. Feedback can compensate for minor departures from output level.

loop

Feedback loop continuously measures output {indicator, control}, modifies input {executive organ, control}, connects indicator and executive organ {transmitter, control}, and supplies energy {motor}. System parts {feedback mechanism} can subtract actual from intended output and send more or less signal to decrease differences, using algorithms {identification algorithm}.

setting

Feedback refines behavior but does not set behavior level, which is set manually.

signal

Too-great signals {overcompensation} cause cycles, as output overshoots intended output. Too-small signals {undercompensation} are not enough to overcome noise or inaccuracies and so fail to return system to expected performance.

feedforward

Classification algorithms can use prototypes or templates {feedforward}. Classification results when stimulus parts closely match prototype or template parts.

feedforward mechanism

Input can regulate output {feedforward mechanism}, by sending signals based on system states and environment to enhance or initiate actions. Feedforward sets output level based on algorithm or system model. After sending feedforward signal, system sends no more signals for a time {refractory period, feedforward}, to allow time to check first-signal results.

examples

Feedforward classification algorithms include feature-based winner-take-all algorithms {Pandemonium algorithm}, feedforward neural nets using feedback during learning {backpropagation, feedforward}, tree-based classifiers, and parametric statistical modeling [Selfridge, 1970].

homeostasis in system

Mechanisms {homeostasis, system} can use feedback controls.

negative feedback

Feedback {negative feedback} can dampen responses to maintain goal level. Negative-feedback algorithms can use different comparator types. ON-OFF regulation uses a constant set point. Proportional regulation uses variable set points. For constant disturbances, integral regulation uses constant set points, but output change rate is proportional to input. Derivative regulation uses input-change rate, proportional to output [Kampis, 1991].
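
The comparator types above in a minimal Python sketch (gains and set points are arbitrary illustration values):

# Each regulator returns a control signal from the error between
# set point and measured output.
def on_off(error):                           # ON-OFF: all or nothing
    return 1.0 if error > 0 else 0.0

def proportional(error, gain=0.5):           # signal proportional to error
    return gain * error

def integral(error, accumulated, gain=0.1):  # output change rate ~ input
    accumulated += error
    return gain * accumulated, accumulated

def derivative(error, previous_error, gain=0.2):  # uses input-change rate
    return gain * (error - previous_error)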

positive feedback

Feedback {positive feedback} can synchronize events and deliver maximum response quickly, good for behavior rituals.

3-Computer Science-System Analysis-Functions

system functions

Systems have functions {system functions}. Linear functions, such as polynomials or series functions, sum weighted harmonic frequencies or weighted power functions. Linear functions can be scalars or vectors.

cross-correlation function

Feature values can relate in direct or inverse proportion {cross-correlation function}.

curl function

Potentials are vectors. Field or potential cross product of del and f {curl, function} describes area density. Curl is vector-field rotation rate {circulation density}, with magnitude and direction. Curl is a linear operator. Using no coordinates, curl is limit, as volume goes to zero, of surface integral, over closed surface, of cross product of unit outward-normal vector and function, all divided by volume: find region-boundary function values, integrate, and divide by region volume.

divergence function

Function dot product of del and f {divergence} describes vector-potential flow or flux. Divergences are scalars. Positive divergence means diverging. Negative divergence means converging. Using no coordinates, divergence is limit, as volume goes to zero, of surface integral, over closed surface, of dot product of unit outward-normal vector and function, all divided by volume: find region-boundary function values, integrate, and divide by region volume.

gradient function

Function del f {gradient, function} describes field or potential changes over space. Gradients are vectors with the direction and magnitude of maximum field change. Field or potential can be scalar or vector. Gradient is a linear operator. Using no coordinates, gradient is limit, as volume goes to zero, of surface integral, over closed surface, of product of unit outward-normal vector and function, all divided by volume: find region-boundary function values, integrate, and divide by region volume.
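
In LaTeX notation, the three coordinate-free definitions above (gradient, divergence, curl) are:

\nabla f = \lim_{V \to 0} \frac{1}{V} \oint_S \hat{n} \, f \, dS
\qquad
\nabla \cdot \mathbf{F} = \lim_{V \to 0} \frac{1}{V} \oint_S \hat{n} \cdot \mathbf{F} \, dS
\qquad
\nabla \times \mathbf{F} = \lim_{V \to 0} \frac{1}{V} \oint_S \hat{n} \times \mathbf{F} \, dS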

radial basis function

Multivariate functions {radial basis function} (RBF) can be weighted sums of radially symmetric basis functions, such as Gaussians, each depending on distance from a center.

input

Inputs can be spatial coordinates, angles, line-segment lengths, colors, segment configurations, feature binocular disparities, or texture descriptions. Training uses input data points.

dimensions

Data points have distances from coordinate means: |x - t|, where x are data-point coordinate values, and t are coordinate means. Data typically has Gaussian distribution, which can be broad or narrow, along all dimensions. Dimension number is typically less than data-point number.

training

Training assigns weights to dimensions or factors.

test sum

Test data point has sum over all weighted dimensions. Comparing sum to input data-object sums can identify test object. For narrow Gaussian distributions, RBF is like lookup table, because test objects only match if input equals mean.
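
A Python sketch of an RBF trained by least squares over Gaussian features (centers, width sigma, and data are illustration choices):

import numpy as np

# Gaussian feature for each center t: exp(-|x - t|^2 / (2*sigma^2)).
def features(X, centers, sigma):
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
    return np.exp(-(d ** 2) / (2 * sigma ** 2))

def train_rbf(X, y, centers, sigma):
    weights, *_ = np.linalg.lstsq(features(X, centers, sigma), y, rcond=None)
    return weights                      # weighted-sum coefficients

def rbf(X, centers, weights, sigma):
    return features(X, centers, sigma) @ weights

# Usage: with a narrow sigma, the RBF acts like a lookup table,
# responding only when input nearly equals a stored center.
X = np.array([[0.0], [1.0], [2.0]])
y = np.array([0.0, 1.0, 4.0])
w = train_rbf(X, y, centers=X, sigma=0.3)
print(rbf(X, X, w, sigma=0.3))          # approximately [0, 1, 4]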

hyperbolic basis function

Functions {hyperbolic basis function} (HBF) can allow more flexibility than radial basis functions. Networks can express weighted function f*(x) = sum over i from 1 to N of c(i) * G((x - t(i))^T * W^T * W * (x - t(i))) + p(x), where x is the data-point vector, t(i) are means or centers, c(i) are weights or coefficients, p(x) is a polynomial term, G is Gaussian distribution, and W is a square matrix. The weighted norm is (x - t(i))^T * W^T * W * (x - t(i)).

3-Computer Science-System Analysis-Operation

system operation

Systems have operations {system operation}.

growth

Systems can start slow, grow, and then level off, in a sigmoid curve. Systems can have linear growth, at constant rate. Systems can have exponential growth, at rate that depends on current size. Systems can have second-order exponential growth, at rate that depends on current size squared. Systems can have differential growth, in which different parts have different growth rates.

newness

To have new behavior, complex systems require information from outside system.

outside influences

Complex systems protect against changes from outside by storing information about current system state, typically in templates, and replacing changed states or parts using that information.

prediction

Machines can predict variable value, then check actual value against predicted value, then adjust prediction method. Process requires information about current system state, including current variable value. Process requires information about factors affecting output value. Process requires information about goal value. Models can use varying inputs and model-parameter configurations to find output values and compare values to actual values. Machine or researcher can then adjust model.
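
A minimal Python sketch of this predict-check-adjust loop, with a single gain parameter as the prediction method (values and rate are illustrative):

# Predict next value from the current value, check against the actual
# value, then adjust the prediction method (the gain) to reduce error.
def predict_check_adjust(values, gain=1.0, rate=0.05):
    current = values[0]
    for actual in values[1:]:
        predicted = gain * current       # predict from current state
        error = actual - predicted       # check against actual value
        gain += rate * error * current   # adjust prediction method
        current = actual
    return gain

# The gain moves toward the true growth factor 1.1.
print(predict_check_adjust([1.0, 1.1, 1.21, 1.331, 1.4641]))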

randomness

Randomness directly relates to information. Random events can add information to system. If a random event with low probability happens, it has high information. New information masks non-random information. New information can make systems less stable.
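
In Shannon's sense, an event with probability P(x) carries self-information I(x) in bits, so low-probability events carry high information. In LaTeX notation:

I(x) = -\log_2 P(x)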

Random nodes or events have different time and space distributions, with different noise types. Random nodes or events can have different distributions, depending on system size or time scale. Fractals can model random events with same distribution at different scales.

rates

Going from one state to another, such as returning from non-equilibrium state to equilibrium state, requires time. Systems can move from one state to another at different speeds. Systems can use internal and external rate controls. Systems have time scales.

representations

In complex systems, several configurations and parts can typically perform same function or hold same representation. Redundancy causes some paths and functions to be equivalent. The same output can result from different processing paths.

rules

Systems have rules, exceptions to rules, checks, and balances.

symbols

Systems recognize and use only inside symbols. Systems cannot use symbols from outside system. Outside symbols must translate into inside symbols. Systems have mechanisms or filters to receive outside data, extract outside symbols, and translate them to inside symbols.

time

Systems and networks require time controller to coordinate data flows.

critical relation

Relations {critical relation} can control relation series. Certain interactions can die out or reach limiting value {lock-in}. Relation can prevent or cause another relation {double bind}. Relation can diverge further, or increase intensity, in reaction to another relation {schismogenesis} [Bateson, 1972] [Bateson, 1979].

equilibrium transient

Systems can slowly reach new state, oscillate toward new state, or have other transient behavior {equilibrium transient}.

reliability of system

Systems are more reliable {reliability, system}| if they perform just one function, rather than several different functions. Systems are more reliable if they perform same function more than once. Systems are more reliable if they use random input samples, rather than only one input. Systems are more reliable if they can perform function subfunctions using non-random input, then combine sub-outputs with consistency and completeness to get whole output.

robustness

Robust systems have independent parts {robustness, system}|, rather than mutually dependent parts.

3-Computer Science-System Analysis-States

system state of system

At all instants, elements, objects, events, and properties have variable values. Variable-value sets are system states {state, system} {system state}. Systems can be in stable, unstable, or cyclic equilibrium or in stable steady state.

3-Computer Science-System Analysis-States-Trajectory

trajectory in systems

Systems have state sequences {trajectory, system}|. Trajectories are paths through phase space.

progression in systems

Systems can go to final state, repeat states, or never repeat, because states constantly interfere. Systems can have interactions that make states that interact to form temporary substates {progression}.

state transition graph

Tables {state transition graph} show all possible state transitions.

ballistic trajectory

System behaviors, once begun, can have no further regulation and so follow trajectory {ballistic trajectory}|.

ergodic process

States can have trajectories from all other states {ergodic process}|. Trajectories can have equal or unequal probabilities. Ergodic processes form loops, because system eventually returns to previous states. Because transitions are probabilistic, the loops do not recur at fixed intervals.

reversible trajectory

If they have no attractors, have no loops, are deterministic, and have conservation, systems can run in reverse {reversible trajectory}.

3-Computer Science-System Analysis-States-Trajectory-Terminus

attractor in systems

Over time, systems tend to move to terminal state {attractor}|, such as constant flow. Trajectories near state tend toward state {attraction basin} {basin of attraction}. Attractors that have more trajectories going to them have higher probability.

catastrophe in system

Trajectories can result in state {catastrophe, system}| that stops trajectory or changes available states.

chaos trajectory

Trajectories can go from one state to another, with no trend toward final state {chaos, system}| [Gleick, 1987] [Lorenz, 1963].

dissipative system

Ordered non-equilibrium systems {dissipative system} can have steady state, because matter and/or energy flow through.

stability in systems

Stable systems {stability, system} have attractors with short state-cycle length and large attraction basins, so changing from one state to nearby states, or changing transition rule slightly, leaves system in same attraction basin. In unstable systems, if state or transition rule slightly changes, system changes to long state-cycle length {chaos} or changes attraction basins {catastrophe}. If network is sparsely connected, network tends to stay in same attraction basin. If network is densely connected, network tends to be chaotic or catastrophic.

state cycle

In deterministic complex systems, trajectories tend to go to repeated states {state cycle}| {recurrence} {oscillator, system}. Examples are pendulums, fluids, circuits, and lasers.

length

Cycles have number of steps {length, cycle}. If nodes have input from all other nodes, length is square root of number of states and is large, and number of attractors is number of nodes divided by natural-logarithm base e and is small. If nodes have two inputs, length is square root of number of nodes and is small, and number of attractors is number of nodes divided by natural-logarithm base e and is small.

3-Computer Science-System Analysis-Patterns

compact pattern

Patterns {compact pattern} can have only points, which connect horizontally or vertically to at least one other point.

value

Patterns have numerical values.

boundary

Unique patterns have unique boundaries and surfaces. If surface has value, pattern has value, and vice versa.

differences

Changing one pattern to another adds or subtracts one point.

number

Each number of points has a fixed number of possible patterns. 1 point has 1 possible pattern. 2 points have 1 possible pattern. 3 points have 2 possible patterns. 4 points have 5 possible patterns. 5 points have 12 possible patterns. 6 points have 35 possible patterns. 7 points have 108 possible patterns. 8 points have 369 possible patterns. 9 points have 1285 possible patterns. 10 points have 4655 possible patterns. 11 points have 17072 possible patterns. 12 points have 63565 possible patterns. 13 points have 238299 possible patterns.

number: multiples

Multiple for each step is 1, 2.00, 2.50, 2.40, 2.92, 3.09, 3.41, 3.48, 3.62, 3.67, 3.72, and 3.75. Multiple for odd steps is 2, 6, 9, 11.89, 13.29, and 13.96. Multiple for even steps is 5, 7, 10.54, 12.61, and 13.66.

factoring

Pattern-number factors are 1*1, 2*1, 5*1, 3*2*2, 7*5, 3*3*3*2*2, 41*3*3, 257*5, 19*7*7*5, 97*11*2*2*2*2, 12713*5, and 79433*3.

point types

Patterns can have different point types, such as colors. For three point types, 1 point has 3 possible patterns. 2 points have 6 possible patterns. 3 points have 36 possible patterns. 4 points have 246 possible patterns. 5 points have 2115 possible patterns. Multiple for each step is 2.00, 6.00, 6.83, and 8.60. Multiple for odd steps is 12 or 58.75. Multiple for even steps is 41. Pattern-number factors are 3*1, 3*2, 3*3*2*2, 41*3*2, and 47*5*3*3.

transformations

Pattern translation, reflection, rotation, and inversion make same pattern.

diagonals

If points can connect diagonally, patterns are not fundamentally different, only less compact. Patterns with points connected diagonally can transform to connect only horizontally or vertically. For example, V is L rotated 45 degrees.

unique pattern representation

To be unique, pattern representations {unique pattern representation} must use pattern center and pairwise relations between points.

center

The center is x, y, and z coordinate means plus unit distance along higher dimension. The extra dimension avoids false equivalences that can happen if center lies near point.

vectors

Vectors go from center to pattern points. For each vector pair, compute something like a cross product. Calculate each pair only once, not again for different order. Ignore the unit vectors themselves and use only the component differences, such as x1*y2 - x2*y1. Square each component difference and sum the squares. Alternatively, use the square root of the sum, as in the programs below.

value

Add all cross-product magnitudes. The resulting sum is unique for each compact pattern.

program 1

In array, in first coordinate, 0 is for y-coordinate, and 1 is for x-coordinate. k = pattern size. p = pattern-point number. max(p) = k. m = pattern number. n(#, k, p, m) are point coordinates. v = 0. v1 = 0. v2 = 0.

' Compute centroid (v, v1) of the pattern points.
For p = 1 To k + 1
    v = v + n(0, k, p, m)
    v1 = v1 + n(1, k, p, m)
Next p
v = v / (k + 1)
v1 = v1 / (k + 1)
' For each point pair, form vectors from centroid (third coordinate 1),
' take the cross-product magnitude w2, and accumulate the sum in v2.
For p = 1 To k
    For p1 = p + 1 To k + 1
        x1 = n(0, k, p, m) - v
        x2 = n(1, k, p, m) - v1
        x3 = 1
        y1 = n(0, k, p1, m) - v
        y2 = n(1, k, p1, m) - v1
        y3 = 1
        w2 = ((x2 * y3 - y2 * x3) ^ 2 + (x1 * y3 - y1 * x3) ^ 2 + (x1 * y2 - y1 * x2) ^ 2) ^ 0.5
        v2 = v2 + w2
    Next p1
Next p

Program compares patterns rapidly.

eye

Eye can perform this computation, because it is at the pattern center, and all points are in front of it. Eye can compare patterns and judge distances.

program 2

Patterns with different colors or point types can have pattern representations. In first coordinate, 0 is for y-coordinate, 1 is for x-coordinate, and 2 is for color or point type. k = pattern size. p = pattern point number; max(p) = k. m = pattern number. n(#, k, p, m) are coordinates and point type. v = 0. v1 = 0. v2 = 0.

' Compute centroid (v, v1) of the pattern points.
For p = 1 To k + 1
    v = v + n(0, k, p, m)
    v1 = v1 + n(1, k, p, m)
Next p
v = v / (k + 1)
v1 = v1 / (k + 1)
' As in program 1, but add a point-type product to the squared
' cross-product components before taking the square root.
For p = 1 To k
    For p1 = p + 1 To k + 1
        x1 = n(0, k, p, m) - v
        x2 = n(1, k, p, m) - v1
        x3 = 1
        y1 = n(0, k, p1, m) - v
        y2 = n(1, k, p1, m) - v1
        y3 = 1
        w2 = ((n(2, k, p, m) + 1) * (n(2, k, p1, m) + 1) + (x2 * y3 - y2 * x3) ^ 2 + (x1 * y3 - y1 * x3) ^ 2 + (x1 * y2 - y1 * x2) ^ 2) ^ 0.5
        v2 = v2 + w2
    Next p1
Next p

order group

Sets can have symbols in sequence {order group}, such as pattern or k-tuple.

subsets

The set can have subsets. Subsets are symbol sets in sequences and patterns.

group

All subsets form order groups. For example, pattern "acg" has subsets NULL, "a", "c", "g", "ac", "cg", and "acg". Order groups contain null set and pattern. If sets can be circular, the set can have subsets "ga", "cga", and "gac".

equivalence

Rules can be that patterns are equivalent over gaps and insertions, so "acg" = "ac gX". Gaps or insertion size or number can have restrictions.

alignment

Two patterns share largest subset. Two patterns share two largest equivalent subsets at optimum alignment.

process

To compare patterns, add or remove gaps and insertions from both patterns to find largest subset. If symbols are dimensions, spaces have maximum number of shared dimensions and minimum number of new dimensions.
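
A Python sketch of finding the largest shared subset, allowing gaps, by longest-common-subsequence dynamic programming (one standard way to implement the alignment described):

# Largest shared subset of two patterns, allowing gaps and insertions.
def largest_shared_subset(a, b):
    m, n = len(a), len(b)
    L = [[0] * (n + 1) for _ in range(m + 1)]   # shared-subset lengths
    for i in range(m):
        for j in range(n):
            if a[i] == b[j]:
                L[i + 1][j + 1] = L[i][j] + 1
            else:
                L[i + 1][j + 1] = max(L[i][j + 1], L[i + 1][j])
    out, i, j = [], m, n                        # trace back the subset
    while i > 0 and j > 0:
        if a[i - 1] == b[j - 1]:
            out.append(a[i - 1]); i -= 1; j -= 1
        elif L[i - 1][j] >= L[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return ''.join(reversed(out))

print(largest_shared_subset("abc", "ag"))     # "a"
print(largest_shared_subset("acgta", "cta"))  # "cta"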

index

Indexes are part of, and have position in, patterns. Pattern symbols have one or more indices. In pattern "acgta", symbol "a" is at positions 1 and 5. Pattern subsets start at one or more indices. In pattern "acgta", subset "ac" starts at position 1.

union

Combining patterns results in new symbol sequences and patterns. Start with first pattern and add new symbols in sequence. Discard symbols that are the same in sequence; for example, combining "abc" and "ag" yields "abcg". It is like union of sets but with order in elements.

Combining is associative but not commutative. Null pattern combines with pattern to give same pattern. The same pattern combines with itself to give same pattern. Inverse pattern combines with pattern to give null pattern, but there can be no inverse pattern.

intersection

Finding largest aligned subset is like set intersection. Aligning "abc" and "ag" yields "a".

Aligning is associative and commutative. Null pattern aligns with pattern to give null pattern. The same pattern aligns with itself to give same pattern. Patterns with no shared symbols align to give null pattern.

conversion

Natural or artificial objects, events, lines, surfaces, solids, n-dimensional figures, geometric points, figures, or images can be linear single-symbol series and so can be patterns. Patterns have order groups, and so all things can align. For example, letter "a" can stand for angle of 45 degrees and letter "L" can stand for angle of 90 degrees, so pattern "aLa" can stand for right triangle with two 45-degree angles.

Symbol sequences can transform into symbol group sequences. For example, pattern "acgta" has three-symbol subsets, "acg", "cgt", and "gta", rather than a five-symbol sequence. Subsets can align rather than single symbols.

Two objects or events can transform into linear RNA-base sequences. They can align by hybridization.

length vs. symbol number

Things can use patterns with few symbols and long sequences or shorter sequences with more symbols.

brain

Perhaps, brain can compare patterns using order groups.
